

Operationalizing Machine Learning: An Interview Study

Shankar, Shreya, Garcia, Rolando, Hellerstein, Joseph M., Parameswaran, Aditya G.

arXiv.org Artificial Intelligence

Organizations rely on machine learning engineers (MLEs) to operationalize ML, i.e., deploy and maintain ML pipelines in production. The process of operationalizing ML, or MLOps, consists of a continual loop of (i) data collection and labeling, (ii) experimentation to improve ML performance, (iii) evaluation throughout a multi-staged deployment process, and (iv) monitoring of performance drops in production. When considered together, these responsibilities seem staggering -- how does anyone do MLOps, what are the unaddressed challenges, and what are the implications for tool builders? We conducted semi-structured ethnographic interviews with 18 MLEs working across many applications, including chatbots, autonomous vehicles, and finance. Our interviews expose three variables that govern success for a production ML deployment: Velocity, Validation, and Versioning. We summarize common practices for successful ML experimentation, deployment, and sustaining production performance. Finally, we discuss interviewees' pain points and anti-patterns, with implications for tool design.
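The four-stage loop the abstract describes can be sketched as a single iteration of control flow. This is a toy illustration only; every function name and the staged-rollout tuple below are hypothetical placeholders, not an API from the paper.

```python
# Hedged sketch of the continual MLOps loop from the abstract:
# (i) collect/label data, (ii) experiment, (iii) multi-staged
# evaluation, (iv) production monitoring. All names are illustrative.

def collect_and_label():
    # (i) stand-in for a data collection/labeling step
    return [0.2, 0.5, 0.9]

def experiment(data):
    # (ii) a toy "model": a single averaged threshold
    return {"threshold": sum(data) / len(data)}

def evaluate(model, stage):
    # (iii) per-stage deployment gate (shadow/canary/full, etc.)
    return model["threshold"] > 0.0

def monitoring_detects_drop():
    # (iv) stub for a production performance-drop detector
    return False

def mlops_iteration(stages=("shadow", "canary", "full")):
    data = collect_and_label()                       # (i)
    model = experiment(data)                         # (ii)
    if not all(evaluate(model, s) for s in stages):  # (iii)
        return "rolled_back", model
    if monitoring_detects_drop():                    # (iv)
        return "retrain", model
    return "deployed", model
```

In practice each stub is a substantial subsystem; the point is only that the four responsibilities form one continual loop, with monitoring feeding back into data collection and experimentation.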


Operationalizing Machine Learning from PoC to Production - KDnuggets

#artificialintelligence

Many companies use machine learning to help create a differentiator and grow their business. However, it's not easy to make machine learning work as it requires a balance between research and engineering. One can come up with a good innovative solution based on current research, but it might not go live due to engineering inefficiencies, cost and complexity. Most companies haven't seen much ROI from machine learning since the benefit is realized only when the models are in production. Let's dive into the challenges and best practices that one can follow to make machine learning work.


Operationalizing Machine Learning for the Automotive Future

#artificialintelligence

It's no secret that global mobility ecosystems are changing rapidly. Like so many other industries, automakers are experiencing massive technology-driven shifts. The automobile itself drove radical societal changes in the 20th century, and current technological shifts are again quickly restructuring the way we think about transportation. The rapid progress in AI/ML has propelled the emergence of new mobility application scenarios that were unthinkable just a few years ago. These complex use cases require some rigorous MLOps planning.


Operationalizing Machine Learning at Enterprise Scale

#artificialintelligence

According to a McKinsey Global Survey, approximately 30% of executives reported active pilot projects, while 71% were expecting a significant increase in AI investment. However, the survey found that progress remained slow, most companies didn't have a clear strategy or infrastructure for sourcing data, and organizations were lacking the foundational building blocks to create value from AI at scale. Deploying AI in industrial operations is difficult for a variety of reasons – complex data management, challenging integration, enterprise security requirements, real-time analytics and capability to handle thousands of models in the production environment. However, a fundamental problem is finding skilled people to implement AI. To circumvent this issue, companies are relying on citizen data scientists – subject matter experts with domain expertise in operations – and providing them with advanced analytical tools.





Operationalizing Machine Learning - DZone AI

#artificialintelligence

Machine learning (ML) powers an increasing number of the applications and services that we use daily. For organizations that are beginning to leverage datasets to generate business insights, the next step after you've developed and trained your model is deploying it to a production scenario. That could mean integration directly within an application or website, or it may mean making the model available as a service. As ML continues to mature, the emphasis shifts from development towards deployment: you need to transition from developing models to real-world production scenarios that are concerned with issues of inference performance, scaling, load balancing, training time, reproducibility, and visibility. In previous posts, we've explored saving and loading trained models with TensorFlow so that they can be served for inference.
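The save-then-serve step the post describes can be sketched with a plain persistence round-trip. Here a pickle file stands in for TensorFlow's SavedModel format, and `LinearModel` is a toy stand-in for a trained model; both are illustrative assumptions, not the post's actual code.

```python
import os
import pickle
import tempfile

# Illustrative sketch: persist a "trained" model to disk, then reload
# it in a (conceptually separate) serving process for inference.
# Pickle substitutes for TensorFlow's SavedModel purely for brevity.

class LinearModel:
    """Toy stand-in for a trained model: y = w * x + b."""
    def __init__(self, weight, bias):
        self.weight, self.bias = weight, bias

    def predict(self, x):
        return self.weight * x + self.bias

def save_model(model, path):
    with open(path, "wb") as f:
        pickle.dump(model, f)

def load_model(path):
    with open(path, "rb") as f:
        return pickle.load(f)

# Round-trip: training side saves, serving side loads and predicts.
path = os.path.join(tempfile.mkdtemp(), "model.pkl")
save_model(LinearModel(2.0, 1.0), path)
serving_model = load_model(path)
```

Separating the artifact on disk from the process that serves it is what makes the deployment concerns the post lists (scaling, load balancing, reproducibility) tractable: the serving tier only ever sees an immutable saved model.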


Smart, Connected Service: Operationalizing Machine Learning at the Edge

#artificialintelligence

In a recent installment of the Captain America movie series, Iron Man leverages machine learning during the heat of battle to predict fight patterns and optimize his engagement approach. The fact that Iron Man uses machine learning isn't that surprising. Machine learning, a form of artificial intelligence in which algorithms learn from data to produce predictions and recommendations, has been around for years. What caught my attention is the operating environment in which the machine learning capability is applied. Do you think Iron Man's superhuman abilities rely on a 4G or LTE connection to a machine learning algorithm in the cloud?